Asymptotic minimax regret for data compression, gambling, and prediction

Authors

  • Qun Xie
  • Andrew R. Barron
Abstract

For problems of data compression, gambling, and prediction of individual sequences x_1, ..., x_n the following questions arise. Given a target family of probability mass functions p(x_1, ..., x_n | θ), how do we choose a probability mass function q(x_1, ..., x_n) so that it approximately minimizes the maximum regret max_{x_1, ..., x_n} (log 1/q(x_1, ..., x_n) − log 1/p(x_1, ..., x_n | θ̂)) and so that it achieves the best constant C in the asymptotics of the minimax regret, which is of the form (d/2) log(n/(2π)) + C + o(1), where d is the parameter dimension? Are there easily implementable strategies that achieve those asymptotics? And how does the solution to the worst-case sequence problem relate to the solution of the corresponding expectation version min_q max_θ E_θ (log 1/q(X_1, ..., X_n) − log 1/p(X_1, ..., X_n | θ))? In the discrete memoryless case, with a given alphabet of size m, the Bayes procedure with the Dirichlet(1/2, ..., 1/2) prior is asymptotically maximin. Simple modifications of it are shown to be asymptotically minimax. The best constant is C_m = log(Γ(1/2)^m / Γ(m/2)), which agrees with the logarithm of the integral of the square root of the determinant of the Fisher information. Moreover, our asymptotically optimal strategies for the worst-case problem are also asymptotically optimal for the expectation version. Analogous conclusions are given for the case of prediction, gambling, and compression when, for each observation, one has access to side information from an alphabet of size t. In this setting the minimax regret is shown to be (t(m − 1)/2) log(n/(2πt)) + t·C_m + o(1).
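In the binary case the Dirichlet(1/2, ..., 1/2) Bayes mixture named in the abstract is the Krichevsky-Trofimov estimator, and its pointwise regret can be checked numerically against the asymptotic value (d/2) log(n/(2π)) + C_m with C_m = log(Γ(1/2)^m / Γ(m/2)). The sketch below is our own illustration, not code from the paper; all function names are hypothetical.

```python
import math

def dirichlet_half_sequential(symbols, m):
    """log q(x_1^n) under the Dirichlet(1/2, ..., 1/2) Bayes mixture over an
    alphabet of size m (Krichevsky-Trofimov): the predictive probability of
    the next symbol a is (count(a) + 1/2) / (n_seen + m/2)."""
    counts = [0] * m
    log_prob = 0.0
    for n_seen, x in enumerate(symbols):
        log_prob += math.log((counts[x] + 0.5) / (n_seen + m / 2))
        counts[x] += 1
    return log_prob

def max_log_likelihood(symbols, m):
    """log max_theta p(x_1^n | theta) for the i.i.d. multinomial family;
    the maximizer theta-hat is the vector of empirical frequencies."""
    counts = [0] * m
    for x in symbols:
        counts[x] += 1
    n = len(symbols)
    return sum(c * math.log(c / n) for c in counts if c > 0)

def regret(symbols, m):
    """Pointwise regret of the mixture on this particular sequence."""
    return max_log_likelihood(symbols, m) - dirichlet_half_sequential(symbols, m)

def minimax_constant(m):
    """C_m = log( Gamma(1/2)^m / Gamma(m/2) )."""
    return m * math.lgamma(0.5) - math.lgamma(m / 2)

# Compare the regret on a binary sequence with the asymptotic formula
# (d/2) log(n / (2*pi)) + C_m, where d = m - 1.
seq = [0, 1, 1, 0, 1, 0, 0, 1, 1, 1] * 20
m = 2
n = len(seq)
approx = (m - 1) / 2 * math.log(n / (2 * math.pi)) + minimax_constant(m)
```

For sequences whose empirical frequencies stay away from the boundary, the computed regret matches the asymptotic value to within O(1/n), illustrating why the Dirichlet(1/2) mixture is asymptotically maximin.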


Related articles

Asymptotically minimax regret for exponential families

We study the problem of data compression, gambling and prediction of a sequence x^n = x_1 x_2 ... x_n from a certain alphabet X, in terms of regret and redundancy with respect to a general exponential family. In particular, we evaluate the regret of the Bayes mixture density and show that it asymptotically achieves the minimax values when variants of the Jeffreys prior are used. Keywords— universal codi...


Asymptotically minimax regret by Bayes mixtures (Proceedings of the 1998 IEEE International Symposium on Information Theory)

We study the problem of data compression, gambling and prediction of a sequence x^n = x_1 x_2 ... x_n from a certain alphabet X, in terms of regret [4] and redundancy with respect to a general exponential family, a general smooth family, and also Markov sources. In particular, we show that variants of the Jeffreys mixture asymptotically achieve their minimax values. These results are generalizations of...


Robustness in portfolio optimization based on minimax regret approach

Portfolio optimization is one of the most important issues for effective and economic investment, and the literature contains a large body of research on it. Most of this research attempts to make Markowitz's original portfolio selection model more realistic, or seeks to solve the model so as to obtain nearly optimal portfolios. An efficient frontier in the ...


Achievability of Asymptotic Minimax Regret in Online and Batch Prediction

The normalized maximum likelihood model achieves the minimax coding (log-loss) regret for data of fixed sample size n. However, it is a batch strategy, i.e., it requires that n be known in advance. Furthermore, it is computationally infeasible for most statistical models, and several computationally feasible alternative strategies have been devised. We characterize the achievability of asymptot...
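The minimax regret achieved by the normalized maximum likelihood (NML) model is the log of the Shtarkov sum, Σ_{x^n} max_θ p(x^n | θ), which can be computed exactly for small models even though NML is infeasible in general. A minimal sketch of this computation for a Bernoulli model (our own illustration, not code from the paper):

```python
import math

def log_shtarkov_sum_bernoulli(n):
    """log of the Shtarkov sum S(n) = sum over x^n of max_theta p(x^n | theta)
    for the Bernoulli model; log S(n) equals the minimax pointwise regret
    of the NML model at sample size n."""
    total = 0.0
    for k in range(n + 1):
        # Maximized likelihood of any sequence with k ones: (k/n)^k ((n-k)/n)^(n-k).
        log_ml = 0.0
        if 0 < k < n:
            log_ml = k * math.log(k / n) + (n - k) * math.log((n - k) / n)
        # Multiply by the number of such sequences, C(n, k), via lgamma.
        log_binom = math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
        total += math.exp(log_binom + log_ml)
    return math.log(total)

# Compare with the asymptotic minimax regret (d/2) log(n/(2*pi)) + C_m
# for d = 1, m = 2, where C_2 = log(Gamma(1/2)^2 / Gamma(1)) = log(pi).
n = 500
exact = log_shtarkov_sum_bernoulli(n)
asymptotic = 0.5 * math.log(n / (2 * math.pi)) + math.log(math.pi)
```

At n = 500 the exact value exceeds the asymptotic formula by only a few hundredths of a nat, which is the o(1) term; this batch computation also requires n to be fixed in advance, the limitation the abstract refers to.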


Commentary on "The Optimality of Jeffreys Prior for Online Density Estimation and the Asymptotic Normality of Maximum Likelihood Estimators"

In the field of prediction with expert advice, a standard goal is to sequentially predict data as well as the best expert in some reference set of ‘expert predictors’. Universal data compression, a subfield of information theory, can be thought of as a special case. Here, the set of expert predictors is a statistical model, i.e. a family of probability distributions, and the predictions are sco...



Journal:
  • IEEE Trans. Information Theory

Volume 46  Issue 

Pages  -

Publication date 2000